
    A comparative study of different strategies of batch effect removal in microarray data: a case study of three datasets

    Batch effects refer to the systematic non-biological variability that is introduced by experimental design and sample processing in microarray experiments. They are a common issue in microarray data and, if ignored, can bias the analysis. Many batch effect removal methods have been developed. Previous comparative work has focused on how effectively these methods remove batch effects and on their impact on downstream classification analysis. The most common type of analysis for microarray data, however, is differential expression (DE) analysis, which identifies markers significantly associated with the outcome of interest, yet no study has examined the impact of these methods on downstream DE analysis. In this project, we investigated the performance of five popular batch effect removal methods, mean-centering, ComBat_p, ComBat_n, SVA, and ratio-based methods, in reducing batch effects and their impact on DE analysis, using three experimental datasets with different sources of batch effects. We found that the performance of these methods is data-dependent: the simple mean-centering method performed reasonably well in all three datasets, but more complicated algorithms such as ComBat can be unstable on certain datasets and should be applied with caution. Given a new dataset, we recommend either using the mean-centering method or, if possible, carefully investigating a few different batch removal methods and choosing the one that works best for the data. This study has important public health significance because better handling of batch effects in microarray data reduces biased results and leads to improved biomarker identification.
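
    As a rough illustration of the simplest of these strategies, the sketch below applies per-batch mean-centering to a genes-by-samples expression matrix. The pandas-based interface and variable names are assumptions made for illustration, not the authors' implementation:

    import numpy as np
    import pandas as pd

    def mean_center_by_batch(expr: pd.DataFrame, batch: pd.Series) -> pd.DataFrame:
        """Per-batch mean-centering of a genes-by-samples expression matrix.

        For every gene, the mean computed within each batch is subtracted from
        that batch's samples, so all batches share a common (zero) location.
        """
        corrected = expr.copy()
        for b in batch.unique():
            cols = batch.index[batch == b]                    # samples in this batch
            corrected[cols] = expr[cols].sub(expr[cols].mean(axis=1), axis=0)
        return corrected

    # Toy usage: 100 genes, two batches of 5 samples, with an artificial batch shift
    rng = np.random.default_rng(0)
    samples = [f"s{i}" for i in range(10)]
    expr = pd.DataFrame(rng.normal(size=(100, 10)), columns=samples)
    expr.iloc[:, 5:] += 2.0                                   # simulated batch effect
    batch = pd.Series(["A"] * 5 + ["B"] * 5, index=samples)
    print(mean_center_by_batch(expr, batch).round(2).head())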

    Gradient metasurfaces: a review of fundamentals and applications

    In the wake of intense research on metamaterials, their two-dimensional analogue, known as metasurfaces, has attracted progressively increasing attention in recent years owing to ease of fabrication and smaller insertion losses, while enabling unprecedented control over the spatial distributions of transmitted and reflected optical fields. Metasurfaces are optically thin planar arrays of resonant subwavelength elements that can be arranged in a strictly or quasi-periodic fashion, or even in an aperiodic manner, depending on the targeted optical wavefronts to be molded with their help. This paper reviews a broad subclass of metasurfaces, viz. gradient metasurfaces, which are devised to exhibit spatially varying optical responses that result in spatially varying amplitudes, phases and polarizations of the scattered fields. After introducing the concept of gradient metasurfaces, we present a classification of different metasurfaces from the viewpoint of their responses, differentiating electrical-dipole, geometric, reflective and Huygens' metasurfaces. The fundamental building blocks essential for the realization of metasurfaces are then discussed in order to elucidate the underlying physics of various physical realizations of both plasmonic and purely dielectric metasurfaces. We then overview the main applications of gradient metasurfaces, including waveplates, flat lenses, spiral phase plates, broadband absorbers, color printing, holograms, polarimeters and surface wave couplers. The review closes with a short section on recently developed nonlinear metasurfaces, followed by an outlook presenting our view on possible future developments and perspectives for future applications. Comment: Accepted for publication in Reports on Progress in Physics.
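
    For orientation, the phase-gradient behaviour described above is commonly summarized by the generalized Snell's law; the form below is standard textbook material (notation assumed for illustration) rather than an equation quoted from the review:

    n_t sin(theta_t) - n_i sin(theta_i) = (lambda_0 / 2 pi) * dPhi/dx

    In LaTeX notation, $n_t\sin\theta_t - n_i\sin\theta_i = \frac{\lambda_0}{2\pi}\,\frac{\mathrm{d}\Phi}{\mathrm{d}x}$, where dPhi/dx is the interfacial phase gradient imposed by the metasurface: a constant gradient steers a refracted beam anomalously, while spatially varying gradients realize flat lenses, spiral phase plates and holograms.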

    Design, control and error analysis of a fast tool positioning system for ultra-precision machining of freeform surfaces

    This thesis was previously held under moratorium from 03/12/19 to 03/12/21. Freeform surfaces are widely found in advanced imaging and illumination systems, orthopaedic implants, high-power beam shaping applications, and other high-end scientific instruments. They give designers greater ability to cope with the performance limitations commonly encountered in simple-shape designs. However, the stringent requirements for surface roughness and form accuracy of freeform components pose significant challenges for current machining techniques, especially in the optical and display market, where large surfaces with tens of thousands of micro features are to be machined. Such highly wavy surfaces require the machine tool cutter to move rapidly while keeping following errors small. Manufacturing efficiency has been a bottleneck in these applications. The rapidly changing cutting forces and inertial forces also contribute a great deal to the machining errors. The difficulty in maintaining good surface quality at high operational frequency suggests the need for an error analysis approach that can predict the dynamic errors. The machining requirements also impose great challenges on machine tool design and the control process. There has been a knowledge gap in how the mechanical structural design affects the achievable positioning stability. The goal of this study was to develop a tool positioning system capable of delivering fast motion with the required positioning accuracy and stiffness for ultra-precision freeform manufacturing. This goal is achieved through deterministic structural design, detailed error analysis, and novel control algorithms. Firstly, a novel stiff-support design was proposed to eliminate the structural and bearing compliances in the structural loop. To implement the concept, a fast positioning device was developed based on a new type of flat voice coil motor. Flexure bearing, magnet track, and motor coil parameters were designed and calculated in detail. A high-performance digital controller and a power amplifier were also built to meet the servo rate requirement of the closed-loop system. A thorough understanding was established of how signals propagate within the control system, which is fundamentally important in determining the loop performance of high-speed control. A systematic error analysis approach based on a detailed model of the system was proposed and verified for the first time, revealing how disturbances contribute to the tool positioning errors. Each source of disturbance was treated as a stochastic process, and these disturbances were synthesised in the frequency domain. The differences between following error and real positioning error were discussed and clarified. The predicted spectrum of following errors agreed with the measured spectrum across the frequency range. It was found that the following errors read from the control software underestimated the real positioning errors at low frequencies and overestimated them at high frequencies. The error analysis approach thus successfully revealed the real tool positioning errors that are mingled with sensor noise. Approaches to suppress disturbances were discussed from the perspectives of both system design and control. A deterministic controller design approach was developed to preclude the uncertainty associated with controller tuning, resulting in a control law that minimizes positioning errors.
    The influences of mechanical parameters such as mass, damping, and stiffness were investigated within the closed-loop framework. Under a given disturbance condition, the optimal bearing stiffness and optimal damping coefficients were found. Experimental positioning tests showed that a larger moving mass helped to combat all disturbances except sensor noise. Because of power limits, the inertia of the fast tool positioning system could not be high. A control algorithm with an additional acceleration-feedback loop was then studied to enhance the dynamic stiffness of the cutting system without the need for large inertia. An analytical model of the dynamic stiffness of the system with acceleration feedback was established. The dynamic stiffness was tested by frequency response tests as well as by intermittent diamond-turning experiments. The following errors and the form errors of the machined surfaces were compared with the estimates provided by the model. It was found that the dynamic stiffness within the acceleration sensor bandwidth was proportionally improved. The additional acceleration sensor brought a new error source into the loop, and its error contribution increased with a larger acceleration gain. At a certain point, the error caused by the increased acceleration gain surpassed the other disturbances and started to dominate, representing the practical upper limit of the acceleration gain. Finally, the developed positioning system was used to cut typical freeform surfaces. A surface roughness of 1.2 nm (Ra) was achieved on a NiP alloy substrate in flat cutting experiments. Freeform surfaces, including a beam integrator surface, a sinusoidal surface, and an arbitrary freeform surface, were successfully machined with optical-grade quality. Ideas for future improvements were proposed at the end of this thesis.
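
    A minimal sketch of the frequency-domain error synthesis described above, assuming a single-axis mass-spring-damper stage under PID position control; the plant parameters, controller gains and disturbance spectra are made-up placeholders, not values from the thesis:

    import numpy as np

    # Assumed plant: m*x'' + c*x' + k*x = F, i.e. P(s) = 1 / (m s^2 + c s + k)
    m, c, k = 0.5, 60.0, 2.0e5            # kg, N*s/m, N/m (placeholders)
    kp, ki, kd = 4.0e5, 2.0e6, 1.0e3      # PID gains (placeholders)

    f = np.logspace(0, 4, 2000)           # 1 Hz ... 10 kHz
    s = 1j * 2 * np.pi * f

    P = 1.0 / (m * s**2 + c * s + k)      # plant: position per unit force
    C = kp + ki / s + kd * s              # PID controller: force per unit error
    L = P * C                             # open-loop transfer function

    S = 1.0 / (1.0 + L)                   # sensitivity (force disturbance -> error)
    T = L / (1.0 + L)                     # complementary sensitivity (noise -> position)

    # Placeholder one-sided disturbance spectra (N^2/Hz and m^2/Hz)
    S_force = 1e-2 * np.ones_like(f)              # cutting / inertial force disturbance
    S_noise = (0.3e-9) ** 2 * np.ones_like(f)     # position sensor noise floor

    # Synthesize the spectrum of the real positioning error from both sources
    S_err = np.abs(P * S) ** 2 * S_force + np.abs(T) ** 2 * S_noise
    rms = np.sqrt(np.sum(0.5 * (S_err[1:] + S_err[:-1]) * np.diff(f)))
    print(f"predicted RMS positioning error ~ {rms * 1e9:.1f} nm")

    The sketch only shows how individual disturbance spectra are weighted by closed-loop transfer functions and summed; the thesis additionally distinguishes this real positioning error from the following error reported by the control software.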

    Unsupervised Contrastive Representation Learning for Knowledge Distillation and Clustering

    Unsupervised contrastive learning has emerged as an important training strategy for representation learning: positive samples are pulled closer together and negative samples are pushed apart in a low-dimensional latent space. Usually, positive samples are augmented versions of the same input and negative samples come from different inputs. Once the low-dimensional representations are learned, further analysis such as clustering and classification can be performed on them. Currently, there are two challenges in this framework. First, empirical studies reveal that even though contrastive learning methods show great progress in representation learning with large models, they do not work well for small models. Second, this framework has achieved excellent clustering results on small datasets but has limitations on datasets with a large number of clusters, such as ImageNet. In this dissertation, our research goal is to develop new unsupervised contrastive representation learning methods and apply them to knowledge distillation and clustering. Knowledge distillation transfers knowledge from high-capacity teachers to small student models to improve the students' performance, and representational knowledge distillation methods distill the knowledge contained in the teachers' representations. Current representational knowledge distillation methods undesirably push apart representations of samples from the same class in their correlation objectives, leading to inferior distillation results. Here, we introduce Dual-level Knowledge Distillation (DLKD), which explicitly combines knowledge alignment and knowledge correlation instead of using a single contrastive objective. We show that both knowledge alignment and knowledge correlation are necessary to improve distillation performance. The proposed DLKD is task-agnostic and model-agnostic and enables effective knowledge transfer from supervised or self-supervised teachers to students. Experiments demonstrate that DLKD outperforms other state-of-the-art methods in a large number of experimental settings, including different (a) pretraining strategies, (b) network architectures, (c) datasets, and (d) tasks. For clustering, a two-stage framework is widely used in deep learning: representations are learned first, and then clustering algorithms such as K-means are applied to them to obtain cluster assignments. However, the learned representations may not be optimal for clustering in this two-stage framework. Here, we propose Contrastive Learning-based Clustering (CLC), which uses contrastive learning to learn cluster assignments directly. We decompose the representation into two parts: one encodes the categorical information under an equipartition constraint, and the other captures instance-wise factors. We theoretically analyze the proposed contrastive loss and reveal that CLC sets different weights for the negative samples while learning cluster assignments. The proposed loss therefore has high expressiveness, which enables us to learn cluster assignments efficiently. Experimental evaluation shows that CLC achieves overall state-of-the-art or highly competitive clustering performance on multiple benchmark datasets. In particular, we achieve 53.4% accuracy on the full ImageNet dataset, outperforming existing methods by a large margin (+10.2%).
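
    As a rough sketch of the contrastive objective underlying both parts of this work, the snippet below implements a standard InfoNCE/NT-Xent-style loss over paired augmented views. It is a generic formulation with illustrative names, not the DLKD or CLC loss itself:

    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
        """Generic InfoNCE/NT-Xent loss for two batches of paired embeddings.

        z1[i] and z2[i] are representations of two augmentations of the same
        input (a positive pair); all other samples in the batch act as negatives.
        """
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        z = torch.cat([z1, z2], dim=0)                    # (2N, d)
        sim = z @ z.t() / temperature                     # scaled cosine similarities
        n = z1.shape[0]
        mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(mask, float("-inf"))        # exclude self-similarity
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(sim, targets)              # positives are i <-> i + N

    # Toy usage with random 128-d embeddings for a batch of 32 samples
    z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
    print(info_nce_loss(z1, z2).item())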

    On-chip spectropolarimetry by fingerprinting with random surface arrays of nanoparticles

    Optical metasurfaces have revolutionized the approach to moulding the propagation of light by enabling simultaneous control of the light phase, momentum, amplitude and polarization. Thus, instantaneous spectropolarimetry became possible by conducting parallel intensity measurements of differently diffracted optical beams. Various implementations of this very important functionality have one feature in common: the determination of wavelength utilizes the dispersion of the diffraction angle, requiring the diffracted beams to be tracked in space. Realization of on-chip spectropolarimetry therefore calls for conceptually different approaches. In this work, we demonstrate that random nanoparticle arrays on metal surfaces, which enable strong multiple scattering of surface plasmon polaritons (SPPs), produce upon illumination complicated SPP scattering patterns whose angular spectra are uniquely determined by the polarization and wavelength of the light, thereby representing spectropolarimetric fingerprints. Using μm-sized circular arrays of randomly distributed gold nanoparticles (density ~ 75 μm⁻²) fabricated on gold films, we measure the angular distributions of scattered SPP waves using leakage radiation microscopy and find that the angular SPP spectra obtained for normally incident light beams differing in wavelength and/or polarization are distinctly different. Our approach allows one to realize on-chip spectropolarimetry by fingerprinting using surface nanostructures fabricated with simple one-step electron-beam lithography. Comment: 22 pages, 5 figures.
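
    A minimal sketch of the fingerprinting step implied above: a measured angular SPP spectrum is matched against a pre-recorded library by correlation. The library structure and names are assumptions for illustration; the actual retrieval procedure used in the paper may differ:

    import numpy as np

    def match_fingerprint(measured: np.ndarray, library: dict) -> str:
        """Return the (wavelength, polarization) label whose stored angular
        SPP spectrum correlates best with the measured one."""
        def normalize(v):
            v = v - v.mean()
            return v / (np.linalg.norm(v) + 1e-12)
        m = normalize(measured)
        scores = {label: float(normalize(ref) @ m) for label, ref in library.items()}
        return max(scores, key=scores.get)

    # Toy library of angular spectra (360 angular bins) and a noisy "measurement"
    rng = np.random.default_rng(1)
    library = {f"{wl} nm / {pol}": rng.random(360)
               for wl in (700, 750, 800) for pol in ("H", "V")}
    truth = "750 nm / V"
    measured = library[truth] + 0.05 * rng.standard_normal(360)
    print(match_fingerprint(measured, library), "(true:", truth + ")")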

    Measurement of spindle error motions by an improved multi-probe method

    This paper proposes an improved multi-probe method for the measurement of spindle error motions. Four degrees of freedom (DOFs) of spindle error motion are measured in a dedicated setup using capacitive sensors. Three sets of probe angles are carefully selected in order to overcome the harmonic suppression problems commonly encountered in the multi-probe measurement approach. The error contribution of each set of angles is analysed, and the measurement results are then modified in the frequency domain so as to minimise the effect of harmonic suppression. The evaluation of the measurement results shows that this method is effective and shows good agreement across repeated measurements.
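
    A brief sketch of the harmonic-suppression issue at the heart of the multi-probe approach, written for the classic three-probe arrangement; the probe angles below are arbitrary examples, not the angle sets selected in the paper:

    import numpy as np

    def three_probe_weights(phi2: float, phi3: float):
        """Weights a, b such that m1 + a*m2 + b*m3 cancels the spindle error
        motion terms for probes placed at angles 0, phi2, phi3 (radians)."""
        # Solve 1 + a*cos(phi2) + b*cos(phi3) = 0 and a*sin(phi2) + b*sin(phi3) = 0
        A = np.array([[np.cos(phi2), np.cos(phi3)],
                      [np.sin(phi2), np.sin(phi3)]])
        a, b = np.linalg.solve(A, np.array([-1.0, 0.0]))
        return float(a), float(b)

    def suppressed_harmonics(phi2: float, phi3: float, kmax: int = 100, tol: float = 0.05):
        """Harmonic orders k at which |G(k)| is near zero, i.e. artifact roundness
        harmonics that this particular probe-angle set cannot recover reliably."""
        a, b = three_probe_weights(phi2, phi3)
        k = np.arange(1, kmax + 1)
        G = 1 + a * np.exp(1j * k * phi2) + b * np.exp(1j * k * phi3)
        return k[np.abs(G) < tol]

    # Example: probes at 0, 99.6 and 202.5 degrees (illustrative angles only)
    print(suppressed_harmonics(np.radians(99.6), np.radians(202.5)))

    Selecting several angle sets whose suppressed harmonics do not overlap, and combining their results in the frequency domain, is the basic idea behind mitigating harmonic suppression.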